A Proofs of Main Results

Neural Information Processing Systems

(conclusion 1), (conclusion 2): Z contains, and only contains, the exogenous noises w.r.t. the source. Based on Theorem 6, we can readily give a proof of Theorem 2. Note that in our setting the two conditions are equivalent. Theorem 7 (trek-separation for directed graphical models; Theorem 2.8 in [ ]). We now show that Theorem 2 can also be proved by the trek-separation theorem. Proof of Theorem 2 (alternative version): since there is a noise component that is not shared, the direction between X and Y is unidentifiable, and GIN(Z, Y) must hold with solution ω.







Fast Proxy Experiment Design for Causal Effect Identification

Elahi, Sepehr, Akbari, Sina, Etesami, Jalal, Kiyavash, Negar, Thiran, Patrick

arXiv.org Artificial Intelligence

Identifying causal effects is a key problem of interest across many disciplines. The two long-standing approaches to estimating causal effects are observational and experimental (randomized) studies. Observational studies can suffer from unmeasured confounding, which may render the causal effects unidentifiable. On the other hand, direct experiments on the target variable may be too costly or even infeasible to conduct. A middle ground between these two approaches is to estimate the causal effect of interest through proxy experiments, which are conducted on variables that are cheaper to intervene on than the main target. Akbari et al. [2022] studied this setting, demonstrated that the problem of designing the optimal (minimum-cost) experiment for causal effect identification is NP-complete, and provided a naive algorithm that may require solving exponentially many NP-hard problems as a subroutine in the worst case. In this work, we provide several reformulations of the problem that allow for designing significantly more efficient algorithms to solve it, as witnessed by our extensive simulations. Additionally, we study the closely related problem of designing experiments that enable us to identify a given effect through valid adjustment sets.
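As a toy illustration of the minimum-cost experiment-design problem (not the paper's reformulations or algorithm), a brute-force search over candidate intervention sets might look as follows; the `identifies` oracle is a hypothetical stand-in for the NP-hard identifiability check, and the costs are invented:

```python
from itertools import combinations

def min_cost_experiment(variables, cost, identifies):
    """Brute-force search for the cheapest intervention set S
    such that identifies(S) is True. Exponential in the number of
    variables; a toy stand-in for the NP-complete problem."""
    best = None
    for r in range(len(variables) + 1):
        for S in combinations(variables, r):
            if identifies(set(S)):
                c = sum(cost[v] for v in S)
                if best is None or c < best[0]:
                    best = (c, set(S))
    return best

# Hypothetical example: the effect becomes identifiable once we
# intervene either on Z alone or on both proxies W1 and W2.
cost = {"Z": 5, "W1": 2, "W2": 2}
identifies = lambda S: "Z" in S or {"W1", "W2"} <= S
print(min_cost_experiment(list(cost), cost, identifies))
# minimum cost is 4, achieved by the proxy pair {W1, W2}
```

The point of the sketch is only the shape of the search space: proxy experiments can beat the direct intervention on Z whenever their combined cost is lower.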


Independence Testing-Based Approach to Causal Discovery under Measurement Error and Linear Non-Gaussian Models

Dai, Haoyue, Spirtes, Peter, Zhang, Kun

arXiv.org Artificial Intelligence

Causal discovery aims to recover the causal structure that generates the observational data. Despite its success on certain problems, in many real-world scenarios the observed variables are not the target variables of interest but imperfect measures of them. Causal discovery under measurement error aims to recover the causal graph among the unobserved target variables from observations made with measurement error. We consider a specific formulation of the problem in which the unobserved target variables follow a linear non-Gaussian acyclic model and the measurement process follows the random measurement error model. Existing methods for this formulation rely on non-scalable over-complete independent component analysis (OICA). In this work, we propose the Transformed Independent Noise (TIN) condition, which checks for independence between a specific linear transformation of some measured variables and certain other measured variables. By leveraging the non-Gaussianity and higher-order statistics of the data, TIN is informative about the graph structure among the unobserved target variables. By utilizing TIN, the ordered group decomposition of the causal model is identifiable; in other words, by conducting only independence tests, we can achieve what previously required OICA. Experimental results on both synthetic and real-world data demonstrate the effectiveness and reliability of our method.
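The core idea, that a suitable linear transformation of some measured variables becomes independent of others, can be illustrated on a simplified toy model. This is not the paper's TIN test: the two-variable model, the regression-derived transformation, and the crude higher-order dependence score below are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Toy linear non-Gaussian model L1 -> L2, observed without measurement
# error for simplicity (illustration only, not the paper's setting).
e1 = rng.uniform(-1, 1, n)   # non-Gaussian exogenous noises
e2 = rng.uniform(-1, 1, n)
L1 = e1
L2 = 2.0 * L1 + e2

# A linear transformation ω·[L2, L1] that cancels the shared component:
# the residual of regressing L2 on L1, i.e. ω = [1, -a].
a = np.cov(L1, L2)[0, 1] / np.var(L1)
residual = L2 - a * L1       # ≈ e2, independent of L1

def dep_score(u, v):
    """Crude dependence score: max absolute correlation between a few
    polynomial transforms, capturing some higher-order dependence."""
    u, v = u - u.mean(), v - v.mean()
    feats = lambda x: np.stack([x, x**2 - (x**2).mean(), x**3])
    c = np.corrcoef(np.vstack([feats(u), feats(v)]))[:3, 3:]
    return np.abs(c).max()

print(dep_score(residual, L1))   # near zero: transformed noise ⟂ L1
print(dep_score(L2, L1))         # large: L2 depends on L1
```

Only the causal direction L1 → L2 admits such a transformation whose output is independent of L1, which is the kind of asymmetry an independence-testing approach can exploit.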


Causal Coupled Mechanisms: A Control Method with Cooperation and Competition for Complex System

Yu, Xuehui, Jiang, Jingchi, Yu, Xinmiao, Guan, Yi, Li, Xue

arXiv.org Artificial Intelligence

Complex systems are ubiquitous in the real world and tend to have complicated and poorly understood dynamics. For their control, the challenge is to guarantee accuracy, robustness, and generalization in such large and noisy environments. Fortunately, a complex system can be divided into multiple modular structures, which human cognition appears to exploit. Inspired by this cognition, we propose a novel control method, Causal Coupled Mechanisms (CCMs), that exploits cooperation in division and competition in combination. Our method employs the theory of hierarchical reinforcement learning (HRL), in which 1) a high-level policy with competitive awareness divides the whole complex system into multiple functional mechanisms, and 2) low-level policies perform the control task of each mechanism. Specifically, for cooperation, a cascade control module supports the series operation of CCMs, and a forward coupled reasoning module recovers the coupling information lost in the division process. On both synthetic systems and a real-world biological regulatory system, the CCM method achieves robust, state-of-the-art control results even with unpredictable random noise. Moreover, generalization results show that reusing prepared specialized CCMs helps the method perform well in environments with different confounders and dynamics.
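The two-level structure described above can be sketched in miniature: a high-level assignment partitions the state dimensions into mechanisms, and a low-level controller acts on each. This is a hypothetical simplification, not the CCM method; the fixed partition, the proportional controllers, and the linear toy dynamics are all assumptions for illustration.

```python
import numpy as np

class TwoLevelController:
    """Minimal two-level control sketch: a high-level assignment maps
    each mechanism to a set of state indices, and one low-level
    proportional controller acts per mechanism."""
    def __init__(self, partition, gains):
        self.partition = partition   # mechanism -> state indices
        self.gains = gains           # mechanism -> proportional gain

    def act(self, state, target):
        action = np.zeros_like(state)
        for mech, idx in self.partition.items():
            # Low-level policy: proportional control on its sub-system.
            action[idx] = self.gains[mech] * (target[idx] - state[idx])
        return action

# Hypothetical 3-dimensional system split into two mechanisms.
ctrl = TwoLevelController({"m1": [0, 1], "m2": [2]},
                          {"m1": 0.5, "m2": 1.0})
state = np.array([0.0, 0.0, 0.0])
target = np.array([2.0, 4.0, 1.0])
for _ in range(100):                 # drive the state toward the target
    state = state + ctrl.act(state, target)
print(np.round(state, 3))            # ≈ target
```

The sketch only shows the division of labour; CCM's competitive high-level partitioning, cascade control, and coupled reasoning modules have no counterpart here.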


A Sufficiently Fast Algorithm for Finding Close to Optimal Junction Trees

Becker, Ann, Geiger, Dan

arXiv.org Artificial Intelligence

An algorithm is developed for finding a close-to-optimal junction tree of a given graph G. The algorithm has worst-case complexity O(c^k n^a), where a and c are constants, n is the number of vertices, and k is the size of the largest clique in a junction tree of G in which this size is minimized. The algorithm guarantees that the logarithm of the size of the state space of the heaviest clique in the junction tree produced is within a constant factor of the optimal value. When k = O(log n), our algorithm yields a polynomial-time inference algorithm for Bayesian networks.
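For contrast with the provable guarantee above, the widely used greedy min-fill heuristic (a different method, with no approximation guarantee) illustrates how an elimination order determines the largest clique of the resulting junction tree; the example graph is invented:

```python
from itertools import combinations

def min_fill_order(adj):
    """Greedy min-fill triangulation heuristic: repeatedly eliminate
    the vertex whose elimination adds the fewest fill-in edges.
    Returns the elimination order and the largest clique size seen."""
    adj = {v: set(nb) for v, nb in adj.items()}   # local copy
    order, max_clique = [], 0
    while adj:
        def fill(v):   # number of fill-in edges eliminating v would add
            nb = adj[v]
            return sum(1 for a, b in combinations(nb, 2)
                       if b not in adj[a])
        v = min(adj, key=fill)
        nb = adj[v]
        max_clique = max(max_clique, len(nb) + 1)  # v plus its neighbours
        for a, b in combinations(nb, 2):           # connect neighbours
            adj[a].add(b)
            adj[b].add(a)
        for u in nb:
            adj[u].discard(v)
        del adj[v]
        order.append(v)
    return order, max_clique

# 4-cycle a-b-c-d: any triangulation has a largest clique of size 3,
# so here the heuristic happens to match the optimum (k = 3).
g = {"a": {"b", "d"}, "b": {"a", "c"},
     "c": {"b", "d"}, "d": {"a", "c"}}
order, k = min_fill_order(g)
print(k)  # 3
```

The cliques formed during elimination become the nodes of the junction tree, so bounding the largest clique (and hence its state-space size) is exactly what the paper's algorithm controls up to a constant factor.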